Low dose CT image enhancement based on generative adversarial network
HU Ziqi, XIE Kai, WEN Chang, LI Meiran, HE Jianbiao
Journal of Computer Applications    2023, 43 (1): 280-288.   DOI: 10.11772/j.issn.1001-9081.2021101710
In order to remove the noise in Low Dose Computed Tomography (LDCT) images and enhance the display effect of the denoised images, an LDCT image enhancement algorithm based on Generative Adversarial Network (GAN) was proposed. Firstly, a GAN combined with perceptual loss and structural loss was used to denoise the LDCT image. Then, dynamic gray-scale enhancement and edge contour enhancement were performed on the denoised image respectively. Finally, Non-Subsampled Contourlet Transform (NSCT) was used to decompose the enhanced images into multi-directional coefficient sub-images in the frequency domain, and the paired high- and low-frequency sub-images were adaptively fused by a Convolutional Neural Network (CNN) to reconstruct the enhanced Computed Tomography (CT) image. Using real clinical data from the AAPM competition as the experimental dataset, image denoising, enhancement, and fusion experiments were carried out. The proposed method achieves 33.0155 dB, 0.9185, and 5.99 on Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Root Mean Square Error (RMSE) respectively. Experimental results show that the proposed algorithm retains the detailed information of the CT image while removing noise, and improves the brightness and contrast of the image, helping doctors analyze the patient's condition more accurately.
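The PSNR and RMSE figures reported above follow their standard definitions; a minimal NumPy sketch of those two metrics (illustrative only, not the authors' evaluation code; `max_val=255.0` assumes an 8-bit dynamic range) is:

```python
import numpy as np

def rmse(ref, test):
    """Root Mean Square Error between a reference image and a test image."""
    diff = ref.astype(float) - test.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20.0 * np.log10(max_val / e)
```

SSIM involves local luminance, contrast, and structure statistics and is omitted here for brevity.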
Evolution relationship extraction of emergency based on attention-based bidirectional long short-term memory network model
WEN Chang, LIU Yu, GU Jinguang
Journal of Computer Applications    2019, 39 (6): 1646-1651.   DOI: 10.11772/j.issn.1001-9081.2018122533
Concerning the problem that existing studies of emergency relationship extraction mostly focus on causality extraction while neglecting other evolution relationships, and in order to improve the completeness of the information extracted for emergency decision-making, an attention-based bidirectional Long Short-Term Memory (LSTM) model was used to extract evolution relationships. Firstly, combining the concept of evolution relationships in emergencies, an evolution relationship model was constructed and formally defined, and the emergency corpus was labeled according to the model. Then, a bidirectional LSTM network was built and an attention mechanism was introduced to compute attention probabilities that highlight the importance of the key words in the text. Finally, the built network model was used to extract the evolution relationships. In the evolution relationship extraction experiments, compared with existing causality extraction methods, the proposed method extracts more complete evolution relationships for emergency decision-making; at the same time, the average precision, recall and F1_score are increased by 7.3%, 6.7% and 7.0% respectively, effectively improving the accuracy of evolution relationship extraction for emergencies.
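The attention step described above — scoring each hidden state, normalizing the scores with a softmax, and taking the weighted sum as a sentence representation — can be sketched in NumPy as follows (an illustration under assumed shapes, not the paper's implementation; `H` stands for the bidirectional LSTM's per-token hidden states and `w` for a learned attention query vector):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Attention pooling over hidden states.

    H: (T, d) hidden states, one row per token.
    w: (d,) attention query vector.
    Returns the (d,) context vector and the (T,) attention probabilities.
    """
    scores = H @ w            # one score per token
    alpha = softmax(scores)   # attention probabilities over tokens
    return alpha @ H, alpha   # weighted sum of hidden states
```

Tokens with larger scores receive exponentially more weight, so the context vector is dominated by the key words the model attends to.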
Real-time face detection for mobile devices with optical flow estimation
WEI Zhenyu, WEN Chang, XIE Kai, HE Jianbiao
Journal of Computer Applications    2018, 38 (4): 1146-1150.   DOI: 10.11772/j.issn.1001-9081.2017092154
To improve the face detection accuracy of mobile devices, a new real-time face detection algorithm for mobile devices was proposed. An improved Viola-Jones detector was used for quick region segmentation, improving segmentation precision without decreasing segmentation speed. At the same time, optical flow estimation was used to propagate the features of discrete keyframes, extracted by a convolutional neural network sub-network, to the other non-keyframes, which increased the efficiency of the convolutional neural network. Experiments were conducted on the YouTube video face database, a self-built one-minute face video database of 20 people, and real-world tests at different resolutions. The results show that the running speed ranges from 2.35 to 22.25 frames per second, reaching the average level of face detection; at a 10% false-alarm rate, the recall of face detection is increased from 65.93% to 82.5%-90.8%, approaching the detection accuracy of the convolutional neural network, which satisfies the speed and accuracy requirements for real-time face detection on mobile devices.
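Propagating keyframe features by optical flow amounts to warping the keyframe's feature map to a non-keyframe along the estimated displacement field, so the expensive network only runs on keyframes. A minimal nearest-neighbor warping sketch (assumed `(H, W, C)` feature layout and `(H, W, 2)` flow in `(dy, dx)` order; not the authors' code) is:

```python
import numpy as np

def warp_features(feat, flow):
    """Warp a keyframe feature map to a non-keyframe via backward flow.

    feat: (H, W, C) keyframe features.
    flow: (H, W, 2) per-pixel displacement, channel 0 = dy, channel 1 = dx.
    Each output pixel samples the keyframe feature at its flow source,
    using nearest-neighbor rounding and clamping at the borders.
    """
    H, W, _ = feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys - flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs - flow[..., 1]).astype(int), 0, W - 1)
    return feat[src_y, src_x]
```

Bilinear sampling is the usual refinement over nearest-neighbor rounding; the structure of the propagation step is the same.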